Supplementary of VRSBench: A Versatile Benchmark for Vision-Language Understanding of Remote Sensing Images

Neural Information Processing Systems

VRSBench consists of 29,614 remote sensing images with detailed captions, 52,472 object referring expressions, and 123,221 visual question-answer pairs. This section documents the dataset in accordance with best practices to ensure transparency, reproducibility, and ethical usage. Images_val.zip contains all raw images in the validation split. Model evaluation: the dataset can serve as a benchmark for comparing the performance of different vision-language models on a standardized set of tasks. All annotations undergo manual review by human annotators.


LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs

Liu, Xiaoran, Song, Yuerong, Liu, Zhigeng, Huang, Zengfeng, Guo, Qipeng, He, Ziwei, Qiu, Xipeng

arXiv.org Artificial Intelligence

Large Language Diffusion Models, or diffusion LLMs, have emerged as a significant focus in NLP research, with substantial effort directed toward understanding their scalability and downstream task performance. However, their long-context capabilities remain unexplored, lacking systematic analysis or methods for context extension. In this work, we present the first systematic investigation comparing the long-context performance of diffusion LLMs and traditional auto-regressive LLMs. We first identify a unique characteristic of diffusion LLMs: unlike auto-regressive LLMs, they maintain remarkably stable perplexity during direct context extrapolation. Moreover, whereas auto-regressive models fail outright on the Needle-In-A-Haystack task when the context exceeds their pretrained length, we discover that diffusion LLMs exhibit a distinct local perception phenomenon, enabling successful retrieval from recent context segments. We explain both phenomena through the lens of Rotary Position Embedding (RoPE) scaling theory. Building on these observations, we propose LongLLaDA, a training-free method that integrates LLaDA with NTK-based RoPE extrapolation. Our results validate that established extrapolation scaling laws remain effective for extending the context windows of diffusion LLMs. Furthermore, we identify long-context tasks where diffusion LLMs outperform auto-regressive LLMs and others where they fall short. Consequently, this study establishes the first length extrapolation method for diffusion LLMs while providing essential theoretical insights and empirical benchmarks critical for advancing future research on long-context diffusion LLMs. The code is available at https://github.com/OpenMOSS/LongLLaDA.
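The abstract's NTK-based RoPE extrapolation can be illustrated with a short sketch. This is not the authors' implementation from the linked repository; it is a generic, commonly used form of NTK-aware scaling, in which the RoPE base is enlarged so that low-frequency dimensions are stretched more than high-frequency ones when the context window grows. The function name and defaults here are illustrative assumptions.

```python
def ntk_rope_inv_freq(head_dim, base=10000.0, scale=4.0):
    """Inverse frequencies for NTK-aware RoPE extrapolation (sketch).

    head_dim: per-head embedding dimension (assumed even).
    base:     original RoPE base (10000 is the common default).
    scale:    target context-extension ratio (e.g. 4x the pretrained length).
    """
    # NTK-aware trick: rescale the base by scale^(d / (d - 2)) so that
    # position interpolation is spread unevenly across dimensions,
    # preserving high-frequency (local) information.
    ntk_base = base * scale ** (head_dim / (head_dim - 2))
    # Standard RoPE inverse-frequency schedule, computed with the new base.
    return [ntk_base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]
```

Used as a drop-in replacement for the stock RoPE frequency table, this requires no training, which is what makes the "training-free" claim in the abstract plausible to test empirically.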